Noise Perturbation


Sensory robustness through top-down feedback and neural stochasticity in recurrent vision models

Greco, Antonino, D'Alessandro, Marco, Friston, Karl J., Pezzulo, Giovanni, Siegel, Markus

arXiv.org Artificial Intelligence

Biological systems leverage top-down feedback for visual processing, yet most artificial vision models succeed in image classification using purely feedforward or recurrent architectures, calling into question the functional significance of descending cortical pathways. Here, we trained convolutional recurrent neural networks (ConvRNNs) on image classification in the presence or absence of top-down feedback projections to elucidate the specific computational contributions of those feedback pathways. We found that ConvRNNs with top-down feedback exhibited a remarkable speed-accuracy trade-off and robustness to noise perturbations and adversarial attacks, but only when they were trained with stochastic neural variability, simulated by randomly silencing single units via dropout. By performing detailed analyses to identify the reasons for these benefits, we observed that feedback information substantially shaped the representational geometry of the post-integration layer, which combines the bottom-up and top-down streams, and that this effect was amplified by dropout. Moreover, feedback signals coupled with dropout optimally constrained network activity onto a low-dimensional manifold and encoded object information more efficiently in out-of-distribution regimes, with top-down information stabilizing the representational dynamics at the population level. Together, these findings uncover a dual mechanism for resilient sensory coding. On the one hand, neural stochasticity prevents unit-level co-adaptation, albeit at the cost of more chaotic dynamics. On the other hand, top-down feedback harnesses high-level information to stabilize network activity on compact low-dimensional manifolds.
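
To make the mechanism concrete, the following PyTorch sketch (our illustration, not the authors' released code; layer sizes and names are assumptions) shows one recurrent step that integrates a bottom-up stream with top-down feedback and applies dropout as a stand-in for stochastic unit silencing:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeedbackConvCell(nn.Module):
        def __init__(self, in_ch, hid_ch, top_ch):
            super().__init__()
            self.bottom_up = nn.Conv2d(in_ch, hid_ch, 3, padding=1)
            self.recurrent = nn.Conv2d(hid_ch, hid_ch, 3, padding=1)
            self.top_down = nn.Conv2d(top_ch, hid_ch, 1)  # feedback projection
            self.dropout = nn.Dropout2d(p=0.5)            # randomly silences single units

        def forward(self, x, h, top):
            # upsample higher-layer activity to this layer's spatial resolution
            fb = F.interpolate(self.top_down(top), size=h.shape[-2:])
            h = torch.relu(self.bottom_up(x) + self.recurrent(h) + fb)
            return self.dropout(h)  # stochastic neural variability during training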


Measuring the Robustness of Audio Deepfake Detectors

Li, Xiang, Chen, Pin-Yu, Wei, Wenqi

arXiv.org Artificial Intelligence

Deepfakes have become a pervasive and rapidly intensifying concern in generative AI, spanning various media types such as images, audio, and videos. Among these, audio deepfakes have been of particular concern due to the ease of high-quality voice synthesis and distribution via platforms such as social media and robocalls. Consequently, detecting audio deepfakes plays a critical role in combating the growing misuse of AI-synthesized speech. However, real-world scenarios often introduce various audio corruptions, such as noise, modification, and compression, that may significantly impact detection performance. This work systematically evaluates the robustness of 10 audio deepfake detection models against 16 common corruptions, categorized into noise perturbation, audio modification, and compression. Using both traditional deep learning models and state-of-the-art foundation models, we make four unique observations. First, our findings show that while most models demonstrate strong robustness to noise, they are notably more vulnerable to modifications and compression, especially when neural codecs are applied. Second, speech foundation models generally outperform traditional models across most scenarios, likely due to their self-supervised learning paradigm and large-scale pre-training. Third, our results show that increasing model size improves robustness, albeit with diminishing returns. Fourth, we demonstrate how targeted data augmentation during training can enhance model resilience to unseen perturbations. A case study on political speech deepfakes highlights the effectiveness of foundation models in achieving high accuracy under real-world conditions. These findings emphasize the importance of developing more robust detection frameworks to ensure reliability in practical deployment settings.
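
A harness for this kind of corruption benchmark fits in a few lines; the snippet below is our illustration, with detector, clips, and labels as placeholders for any model and dataset, and white noise at a target SNR standing in for one of the sixteen corruption types:

    import numpy as np

    def add_white_noise(wave, snr_db=10.0):
        """Add white noise at a target signal-to-noise ratio (dB)."""
        signal_power = np.mean(wave ** 2)
        noise_power = signal_power / (10 ** (snr_db / 10))
        return wave + np.random.randn(*wave.shape) * np.sqrt(noise_power)

    def robustness(detector, clips, labels, corrupt=add_white_noise):
        """Accuracy of a detector on corrupted versions of the clips."""
        preds = [detector(corrupt(w)) for w in clips]
        return np.mean(np.array(preds) == np.array(labels))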


Deep Learning for predicting rate-induced tipping

Huang, Yu, Bathiany, Sebastian, Ashwin, Peter, Boers, Niklas

arXiv.org Artificial Intelligence

Nonlinear dynamical systems exposed to changing forcing can exhibit catastrophic transitions between alternative and often markedly different states. The phenomenon of critical slowing down (CSD) can be used to anticipate such transitions if caused by a bifurcation and if the change in forcing is slow compared to the internal time scale of the system. However, in many real-world situations, these assumptions are not met and transitions can be triggered because the forcing exceeds a critical rate. For example, given the pace of anthropogenic climate change in comparison to the internal time scales of key Earth system components, such as the polar ice sheets or the Atlantic Meridional Overturning Circulation, such rate-induced tipping poses a severe risk. Moreover, depending on the realisation of random perturbations, some trajectories may transition across an unstable boundary, while others do not, even under the same forcing. CSD-based indicators generally cannot distinguish these cases of noise-induced tipping versus no tipping. This severely limits our ability to assess the risks of tipping, and to predict individual trajectories. To address this, we make a first attempt to develop a deep learning framework to predict transition probabilities of dynamical systems ahead of rate-induced transitions. Our method issues early warnings, as demonstrated on three prototypical systems for rate-induced tipping, subjected to time-varying equilibrium drift and noise perturbations. Exploiting explainable artificial intelligence methods, our framework captures the fingerprints necessary for early detection of rate-induced tipping, even in cases of long lead times. Our findings demonstrate the predictability of rate-induced and noise-induced tipping, advancing our ability to determine safe operating spaces for a broader class of dynamical systems than possible so far.
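
As a schematic of the prediction setup (our simplification; the 1-D CNN architecture and segment length are assumptions, not the paper's), a small network can map an early trajectory segment to the probability that the trajectory later tips:

    import torch
    import torch.nn as nn

    class TippingPredictor(nn.Module):
        """Maps an early trajectory segment to a transition probability."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(16, 1),
            )

        def forward(self, segment):                  # segment: (batch, 1, length)
            return torch.sigmoid(self.net(segment))  # P(transition)

Trained with binary cross-entropy against whether each noisy realisation eventually transitions, such a model yields the per-trajectory transition probabilities described above.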


NoiseBoost: Alleviating Hallucination with Noise Perturbation for Multimodal Large Language Models

Wu, Kai, Jiang, Boyuan, Jiang, Zhengkai, He, Qingdong, Luo, Donghao, Wang, Shengzhi, Liu, Qingwen, Wang, Chengjie

arXiv.org Artificial Intelligence

Multimodal large language models (MLLMs) provide a powerful mechanism for understanding visual information by building on large language models. However, MLLMs are notorious for suffering from hallucinations, especially when generating lengthy, detailed descriptions for images. Our analysis reveals that hallucinations stem from the inherent summarization mechanism of large language models, leading to excessive dependence on linguistic tokens while neglecting vision information. In this paper, we propose NoiseBoost, a broadly applicable and simple method for alleviating hallucinations in MLLMs through the integration of noise feature perturbations. Noise perturbation acts as a regularizer, facilitating a balanced distribution of attention weights among visual and linguistic tokens. Despite its simplicity, NoiseBoost consistently enhances the performance of MLLMs across common training strategies, including supervised fine-tuning and reinforcement learning. Further, NoiseBoost pioneers semi-supervised learning for MLLMs, unleashing the power of unlabeled data. Comprehensive experiments demonstrate that NoiseBoost improves dense caption accuracy by 8.1% under human evaluation and achieves comparable results with 50% of the data by mining unlabeled data. Code and models are available at https://kaiwu5.github.io/noiseboost.
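
The core perturbation step can be sketched in a few lines; the function below is our hedged reconstruction (the name and noise scale alpha are ours, not the released code) of injecting Gaussian noise into visual features during training:

    import torch

    def noise_boost(visual_feats, alpha=0.1, training=True):
        """Perturb visual features with Gaussian noise as a regularizer,
        nudging attention toward a balance of visual and linguistic tokens."""
        if not training:
            return visual_feats
        return visual_feats + alpha * torch.randn_like(visual_feats)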


Robustness Assessment of a Runway Object Classifier for Safe Aircraft Taxiing

Elboher, Yizhak, Elsaleh, Raya, Isac, Omri, Ducoffe, Mélanie, Galametz, Audrey, Povéda, Guillaume, Boumazouza, Ryma, Cohen, Noémie, Katz, Guy

arXiv.org Artificial Intelligence

As deep neural networks (DNNs) are becoming the prominent solution for many computational problems, the aviation industry seeks to explore their potential in alleviating pilot workload and in improving operational safety. However, the use of DNNs in this type of safety-critical application requires a thorough certification process. This need can be addressed through formal verification, which provides rigorous assurances -- e.g., by proving the absence of certain mispredictions. In this case-study paper, we demonstrate this process using an image-classifier DNN currently under development at Airbus and intended for use during the aircraft taxiing phase. We use formal methods to assess this DNN's robustness to three common image perturbation types: noise, brightness and contrast, and some of their combinations. This process entails multiple invocations of the underlying verifier, which might be computationally expensive; we therefore propose a method that leverages the monotonicity of these robustness properties, as well as the results of past verification queries, to reduce the overall number of verification queries required by nearly 60%. Our results provide an indication of the level of robustness achieved by the DNN classifier under study, and indicate that it is considerably more vulnerable to noise than to brightness or contrast perturbations.
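
The monotonicity argument can be illustrated with a binary search (our simplification of the query-reduction idea, with verify standing in for the underlying verifier): if robustness is proven at some perturbation radius, it holds at every smaller radius, so a handful of queries locates the threshold:

    def max_robust_radius(verify, radii):
        """verify(eps) -> True iff robustness is proven at radius eps.
        radii must be sorted ascending; returns the largest proven radius."""
        lo, hi, best = 0, len(radii) - 1, None
        while lo <= hi:
            mid = (lo + hi) // 2
            if verify(radii[mid]):
                best, lo = radii[mid], mid + 1  # robust here -> robust at all smaller radii
            else:
                hi = mid - 1                    # not robust here -> not robust at larger radii
        return best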


Effective Slogan Generation with Noise Perturbation

Kim, Jongeun, Kim, MinChung, Kim, Taehwan

arXiv.org Artificial Intelligence

Slogans play a crucial role in building a firm's brand identity. A slogan is expected to reflect the firm's vision and the brand's value propositions in memorable and likeable ways. Automating the generation of slogans with such characteristics is challenging. Previous studies developed and tested slogan generation with syntactic control and summarization models, which are not capable of generating distinctive slogans. We introduce a novel approach that leverages a pre-trained T5 transformer model with noise perturbation on a newly proposed 1:N matching pair dataset. This approach serves as a contributing factor in generating distinctive and coherent slogans. Furthermore, the proposed approach incorporates descriptions of the firm and brand into the generation of slogans. We evaluate the generated slogans using ROUGE-1, ROUGE-L, and Cosine Similarity metrics and also assess them with human subjects in terms of distinctiveness, coherence, and fluency. The results demonstrate that our approach yields better performance than baseline models and other transformer-based models.
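
The noise-perturbation step can be sketched with Hugging Face Transformers (a minimal sketch under our assumptions; the checkpoint and noise scale sigma are illustrative, not the paper's settings):

    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    model = T5ForConditionalGeneration.from_pretrained("t5-small")
    tokenizer = T5Tokenizer.from_pretrained("t5-small")

    def noisy_training_loss(description, slogan, sigma=0.05):
        enc = tokenizer(description, return_tensors="pt")
        labels = tokenizer(slogan, return_tensors="pt").input_ids
        embeds = model.get_input_embeddings()(enc.input_ids)
        embeds = embeds + sigma * torch.randn_like(embeds)  # noise perturbation
        return model(inputs_embeds=embeds,
                     attention_mask=enc.attention_mask,
                     labels=labels).loss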


Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning

Yuan, Xin, Ni, Wei, Ding, Ming, Wei, Kang, Li, Jun, Poor, H. Vincent

arXiv.org Artificial Intelligence

While preserving the privacy of federated learning (FL), differential privacy (DP) inevitably degrades the utility (i.e., accuracy) of FL due to model perturbations caused by DP noise added to model updates. Existing studies have considered only noise with a persistent root-mean-square amplitude, overlooking the opportunity to adjust the amplitude to alleviate the adverse effects of the noise. This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of FL and retain the capability of adjusting the learning performance. Specifically, we propose a geometric series form for the noise amplitude and reveal analytically the dependence of the series on the number of global aggregations and the $(\epsilon,\delta)$-DP requirement. We derive an online refinement of the series to prevent FL from premature convergence resulting from excessive perturbation noise. Another important aspect is an upper bound developed for the loss function of a multi-layer perceptron (MLP) trained by FL running the new DP mechanism. Accordingly, the optimal number of global aggregations is obtained, balancing learning and privacy. Extensive experiments are conducted using MLP, support vector machine, and convolutional neural network models on four public datasets. The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated, compared to the state-of-the-art Gaussian noise mechanism with a persistent noise amplitude.
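
A toy version of the amplitude schedule is shown below (the paper derives the series parameters from the number of global aggregations and the $(\epsilon,\delta)$-DP budget; sigma0 and the decay ratio q here are placeholders):

    import numpy as np

    def geometric_sigmas(sigma0, q, T):
        """Per-round noise amplitude sigma_t = sigma0 * q**t, with 0 < q < 1."""
        return sigma0 * q ** np.arange(T)

    def perturb_update(update, sigma, rng=np.random.default_rng()):
        """Gaussian-perturb one model update with the round's amplitude."""
        return update + rng.normal(0.0, sigma, size=update.shape)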


A study on the deviations in performance of FNNs and CNNs in the realm of grayscale adversarial images

Nagabushanam, Durga Shree, Mathew, Steve, Chowdhary, Chiranji Lal

arXiv.org Artificial Intelligence

Neural networks are prone to lower accuracy when classifying images with noise perturbation. Convolutional Neural Networks (CNNs) are known for their unparalleled accuracy in the classification of benign images. But our study shows that they are extremely vulnerable to noise addition, while Feed-forward Neural Networks (FNNs) are barely affected by noise perturbation, maintaining their accuracy almost undisturbed. FNNs are observed to be better at classifying noise-intensive, single-channeled images that are sheer noise to human vision. In our study, we used the hand-written digits dataset MNIST with the following architectures: FNNs with 1 and 2 hidden layers and CNNs with 3, 4, 6 and 8 convolutions, and analyzed their accuracies. FNNs stand out, maintaining a classification accuracy of more than 85% irrespective of the intensity of noise. In our analysis of CNNs with this data, the decline in classification accuracy of the CNN with 8 convolutions was half that of the other CNNs. Correlation analysis and mathematical modelling of the accuracy trends act as roadmaps to these conclusions.
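
The protocol amounts to corrupting test images at increasing noise intensities and tracking each model's accuracy; a minimal sketch (our illustration, with model.predict as a placeholder interface) follows:

    import numpy as np

    def add_noise(images, std):
        """images in [0, 1]; clip after adding Gaussian noise."""
        return np.clip(images + np.random.randn(*images.shape) * std, 0.0, 1.0)

    def accuracy_vs_noise(model, images, labels, stds=(0.1, 0.3, 0.5, 1.0)):
        """Classification accuracy at each noise intensity."""
        return {s: np.mean(model.predict(add_noise(images, s)) == labels)
                for s in stds}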


Evaluating generative networks using Gaussian mixtures of image features

Luzi, Lorenzo, Marrero, Carlos Ortiz, Wynar, Nile, Baraniuk, Richard G., Henry, Michael J.

arXiv.org Artificial Intelligence

We develop a measure for evaluating the performance of generative networks given two sets of images. A popular performance measure currently used to do this is the Fr\'echet Inception Distance (FID). FID assumes that images featurized using the penultimate layer of Inception-v3 follow a Gaussian distribution, an assumption which cannot be violated if we wish to use FID as a metric. However, we show that Inception-v3 features of the ImageNet dataset are not Gaussian; in particular, every single marginal is not Gaussian. To remedy this problem, we model the featurized images using Gaussian mixture models (GMMs) and compute the 2-Wasserstein distance restricted to GMMs. We define a performance measure, which we call WaM, on two sets of images by using Inception-v3 (or another classifier) to featurize the images, estimate two GMMs, and use the restricted $2$-Wasserstein distance to compare the GMMs. We experimentally show the advantages of WaM over FID, including how FID is more sensitive than WaM to imperceptible image perturbations. By modelling the non-Gaussian features obtained from Inception-v3 as GMMs and using a GMM metric, we can more accurately evaluate generative network performance.
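
A condensed recipe for WaM might look as follows (our sketch, assuming features have already been extracted with Inception-v3 and using the POT library; the mixture size k is illustrative): fit a GMM to each feature set, compute pairwise Gaussian 2-Wasserstein costs, and solve the discrete transport problem over the mixture weights:

    import numpy as np
    import ot                                  # POT: Python Optimal Transport
    from scipy.linalg import sqrtm
    from sklearn.mixture import GaussianMixture

    def gaussian_w2_sq(m1, c1, m2, c2):
        """Squared 2-Wasserstein distance between two Gaussians."""
        root = sqrtm(sqrtm(c2) @ c1 @ sqrtm(c2)).real
        return float(np.sum((m1 - m2) ** 2) + np.trace(c1 + c2 - 2 * root))

    def wam(feats_a, feats_b, k=5):
        ga = GaussianMixture(k).fit(feats_a)
        gb = GaussianMixture(k).fit(feats_b)
        cost = np.array([[gaussian_w2_sq(ga.means_[i], ga.covariances_[i],
                                         gb.means_[j], gb.covariances_[j])
                          for j in range(k)] for i in range(k)])
        # restricted 2-Wasserstein: optimal transport over mixture weights
        return np.sqrt(ot.emd2(ga.weights_, gb.weights_, cost))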


Sensitivity of Deep Convolutional Networks to Gabor Noise

Co, Kenneth T., Muñoz-González, Luis, Lupu, Emil C.

arXiv.org Machine Learning

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and its implications deserve further in-depth study.
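
For intuition, a toy Gabor-noise pattern can be generated procedurally and bounded to a small L-infinity budget before being added to every input image (our simplification of procedural Gabor noise; all parameters are illustrative):

    import numpy as np

    def gabor_kernel(size, sigma, freq, theta):
        """Gaussian envelope times an oriented cosine harmonic."""
        ax = np.arange(size) - size // 2
        x, y = np.meshgrid(ax, ax)
        rot = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * rot)

    def gabor_noise(img_size=224, n_kernels=50, ksize=23, sigma=4.0,
                    freq=0.1, eps=8 / 255):
        """Sum randomly placed copies of one kernel, rescale to an L-inf ball."""
        canvas = np.zeros((img_size, img_size))
        kern = gabor_kernel(ksize, sigma, freq, theta=np.random.uniform(0, np.pi))
        for _ in range(n_kernels):
            cx, cy = np.random.randint(0, img_size - ksize, size=2)
            canvas[cy:cy + ksize, cx:cx + ksize] += kern
        return eps * canvas / np.max(np.abs(canvas))  # input-agnostic perturbation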